

Apple to Scan Every Device for Child Abuse Content -- But Experts Fear for Privacy

#artificialintelligence

Apple on Thursday said it is introducing new child safety features in iOS, iPadOS, watchOS, and macOS as part of its efforts to limit the spread of Child Sexual Abuse Material (CSAM) in the U.S. The iPhone maker said it intends to begin client-side scanning of images on Apple devices for known child abuse content as they are uploaded to iCloud Photos. In addition, it will use on-device machine learning to vet iMessage images sent or received by child accounts (aged under 13) and warn parents of sexually explicit photos shared over the messaging platform. Apple also plans to update Siri and Search to stage an intervention when users search for CSAM-related topics, alerting them that "interest in this topic is harmful and problematic."

"Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit," Apple noted. "The feature is designed so that Apple does not get access to the messages." The feature, called Communication Safety, is an opt-in setting that must be enabled by parents through the Family Sharing feature.

Detection of known CSAM images involves on-device matching against a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organizations, performed before the photos are uploaded to the cloud.
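Apple's actual system uses a proprietary perceptual hash (NeuralHash) combined with cryptographic threshold techniques, none of which are reproduced here. Purely as an illustration of the core idea of client-side matching against a known-hash database, a minimal sketch might look like this (the hash function, database contents, and function names are all invented for the example; a cryptographic hash stands in where Apple would use a perceptual one):

```python
import hashlib

# Hypothetical database of known-image hashes. In the real system these
# would be perceptual hashes supplied by NCMEC and other organizations;
# here they are just SHA-256 digests of placeholder byte strings.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}

def matches_known(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-hash database.

    Note: SHA-256 only matches byte-identical files. A perceptual hash
    (as in Apple's NeuralHash) would also match visually similar images.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# An upload pipeline would check each photo before sending it to the cloud:
print(matches_known(b"known-image-1"))  # True: hash is in the database
print(matches_known(b"holiday-photo"))  # False: no match, upload proceeds
```

The point of doing this client-side is that the comparison happens on the device itself, before upload, rather than on the server.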


AI tool detects child abuse images with 99% accuracy

#artificialintelligence

A new AI-powered tool claims to detect child abuse images with around 99 percent accuracy. The tool, called Safer, was developed by the non-profit Thorn to help businesses that lack in-house filtering systems detect and remove such images. According to the Internet Watch Foundation in the UK, reports of child abuse images surged 50 percent during the COVID-19 lockdown: in the 11 weeks starting 23 March, its hotline logged 44,809 reports of images, compared with 29,698 over the same period the previous year. Many of these images came from children who, spending more time online, were coerced into producing images of themselves.
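As a quick sanity check on the figures quoted above, the year-over-year increase in the IWF's report counts works out to roughly the cited 50 percent:

```python
# IWF hotline reports over the 11 weeks from 23 March, per the article.
current, previous = 44_809, 29_698

# Relative increase over the same period the previous year.
increase = (current - previous) / previous
print(f"{increase:.0%}")  # prints 51%, consistent with the ~50% surge cited
```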